Bag of Tricks for Fully Test-Time Adaptation
Fully Test-Time Adaptation (TTA), which aims at adapting models to data
drifts, has recently attracted wide interest. Numerous tricks and techniques
have been proposed to ensure robust learning on arbitrary streams of unlabeled
data. However, assessing the true impact of each individual technique and
obtaining a fair comparison still constitutes a significant challenge. To help
consolidate the community's knowledge, we present a categorization of selected
orthogonal TTA techniques, including small batch normalization, stream
rebalancing, reliable sample selection, and network confidence calibration. We
meticulously dissect the effect of each approach on different scenarios of
interest. Through our analysis, we shed light on trade-offs induced by those
techniques between accuracy, the computational power required, and model
complexity. We also uncover the synergy that arises when combining techniques
and are able to establish new state-of-the-art results.
Comment: Accepted at WACV 202
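One of the categorized tricks, adapting batch normalization statistics at test time, can be sketched minimally. The blending scheme, the `prior` weight, and the function name below are illustrative assumptions, not the paper's exact method; the idea is that with small test batches the batch statistics are noisy, so they are commonly mixed with the stored source-domain statistics:

```python
def adapt_bn_stats(source_mean, source_var, batch, prior=0.7):
    """Blend source-domain BN statistics with test-batch statistics.

    `prior` weights the stored source statistics; with small, noisy
    test batches, a larger prior keeps the normalization stable.
    (Illustrative sketch; parameter names are assumptions.)
    """
    n = len(batch)
    batch_mean = sum(batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
    mean = prior * source_mean + (1.0 - prior) * batch_mean
    var = prior * source_var + (1.0 - prior) * batch_var
    return mean, var
```

Setting `prior=1.0` recovers pure source statistics (no adaptation), while `prior=0.0` recovers standard batch normalization on the test batch; intermediate values trade robustness against adaptation speed.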
Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images
Training a deep learning model to classify histopathological images is
challenging because of the color and shape variability of the cells and
tissues, and because of the limited amount of available data, which prevents
proper learning of those variations. Variations can come from the image
acquisition
process, for example, due to different cell staining protocols or tissue
deformation. To tackle this challenge, Data Augmentation (DA) can be used
during training to generate additional samples by applying transformations to
existing ones, to help the model become invariant to those color and shape
transformations. The problem with DA is that it is not only dataset-specific
but also requires domain knowledge, which is not always available. Without
this knowledge, selecting the right transformations can only be done using
heuristics or through a computationally demanding search. To address this, we
propose an automatic DA learning method. In this method, the DA parameters,
i.e., the transformation parameters needed to improve model training, are
treated as learnable and are optimized automatically and efficiently through a
bilevel optimization approach with truncated backpropagation. We
validated the method on six different datasets. Experimental results show that
our model can learn color and affine transformations that are more helpful to
train an image classifier than predefined DA transformations, which are also
more expensive, as they must be selected before training by grid search on a
validation set. We also show that, similarly to a model trained with
RandAugment, our model has only a few method-specific hyperparameters to
tune, yet performs better. This makes our model a good solution for learning
the best DA parameters, especially in the context of histopathological images,
where heuristically defining potentially useful transformations is not
trivial.
Comment: arXiv admin note: text overlap with arXiv:2006.1469
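The bilevel idea can be illustrated on a toy 1-D regression problem, where the learnable DA parameter is an additive shift applied to the training input and the inner loop is truncated to a single gradient step. The setup, function name, and learning rates below are illustrative assumptions, not the paper's actual formulation:

```python
def bilevel_step(w, delta, x, y, xv, yv, lr=0.1, meta_lr=0.05):
    """One truncated (single inner step) bilevel update of a DA parameter.

    Toy 1-D linear regression: the augmentation adds `delta` to the
    training input. The inner step trains the weight w on the augmented
    sample; the outer step differentiates the validation loss through
    that inner step and adjusts delta. (Illustrative sketch only.)
    """
    # Inner step: gradient descent on L_train = (w * (x + delta) - y)^2.
    xa = x + delta
    g = 2.0 * (w * xa - y) * xa            # dL_train / dw
    w_new = w - lr * g
    # Outer step: chain rule through w_new for L_val = (w_new * xv - yv)^2.
    dval_dw = 2.0 * (w_new * xv - yv) * xv # dL_val / dw_new
    dg_ddelta = 2.0 * (2.0 * w * xa - y)   # d(dL_train/dw) / d(delta)
    ddelta = dval_dw * (-lr) * dg_ddelta   # dL_val / d(delta)
    return w_new, delta - meta_lr * ddelta
```

Iterating this update trains the weight on augmented data while simultaneously steering `delta` toward the shift that makes training on the augmented sample most useful for the validation objective; truncating to a single inner step is what keeps the outer gradient cheap to compute.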